
    Using an existing website as a queryable low-cost LOD publishing interface

    Maintaining an Open Dataset comes at an extra recurring cost when it is published through a dedicated Web interface. As publishing a dataset publicly rarely yields a direct financial return, these extra costs need to be minimized. We therefore want to explore reusing existing infrastructure by enriching existing websites with Linked Data. In this demonstrator, we advised the data owner to annotate a digital heritage website with JSON-LD snippets, resulting in a dataset of more than three million triples that is now available and officially maintained. The website itself is paged, so Hydra partial collection view controls were added to the snippets. We then extended the modular query engine Comunica to follow page controls and to extract data from HTML documents while querying. This way, a SPARQL or GraphQL query over multiple heterogeneous data sources can power automated data reuse. While the query performance of such an interface is visibly poor, it becomes easy to create composite data dumps. As a result of implementing these building blocks in Comunica, any paged collection and enriched HTML page becomes queryable by the query engine. This enables heterogeneous data interfaces to share functionality and become technically interoperable.
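
    The following is a minimal sketch of how such an annotated, paged website could be queried from TypeScript with Comunica; the page URL and the schema.org property are hypothetical stand-ins for the actual heritage website and its vocabulary.

    ```typescript
    // Sketch: querying a JSON-LD-annotated, Hydra-paged HTML collection with Comunica.
    // The URL below is a hypothetical stand-in for the annotated heritage website.
    import { QueryEngine } from '@comunica/query-sparql';

    const engine = new QueryEngine();

    async function listObjects(): Promise<void> {
      // The engine extracts the JSON-LD embedded in the HTML; with the extension
      // described above it can also follow Hydra partial collection view controls
      // (next-page links) while resolving the query.
      const bindingsStream = await engine.queryBindings(`
        SELECT ?object ?title WHERE {
          ?object <http://schema.org/name> ?title .
        } LIMIT 100`, {
        sources: ['https://heritage.example.org/collection?page=1'],
      });

      bindingsStream.on('data', (bindings) => {
        console.log(bindings.get('title')?.value);
      });
      bindingsStream.on('error', (err) => console.error(err));
    }

    listObjects();
    ```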

    A Process Framework for Designing Software Reference Architectures for Providing Tools as a Service

    Software Reference Architecture (SRA), a generic architecture solution for a specific type of software system, provides a foundation for the design of concrete architectures in terms of architecture design guidelines and architecture elements. The complexity and size of certain types of software systems call for customized and systematic SRA design and evaluation methods. In this paper, we present a software Reference Architecture Design process Framework (RADeF) that can be used for the analysis, design, and evaluation of an SRA for provisioning Tools as a Service as part of a cloud-enabled workSPACE (TSPACE). The framework is based on state-of-the-art results from the literature and our experience with designing software architectures for cloud-based systems. We have applied RADeF to the SRA design of two types of TSPACE: a software architecting TSPACE and a software implementation TSPACE. The presented framework emphasizes keeping the conceptual meta-model of the domain under investigation at the core of the SRA design strategy and using it as a guiding tool for the design, evaluation, implementation, and evolution of the SRA. The framework also emphasizes considering the nature of the tools to be provisioned and the underlying cloud platforms to be used while designing the SRA. The framework recommends a multi-faceted approach for evaluating the SRA and a quantifiable measurement scheme for evaluating its quality. We foresee that RADeF can facilitate software architects and researchers during the design, application, and evaluation of an SRA and its instantiation into concrete software systems. (Muhammad Aufeef Chauhan, Muhammad Ali Babar, and Christian W. Probst)
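
    As an illustration of keeping a conceptual meta-model at the core of SRA design, the sketch below expresses a few plausible TSPACE meta-model elements as types; the names and fields are assumptions for illustration, not the meta-model defined by RADeF.

    ```typescript
    // Illustrative sketch (not from the paper): a conceptual meta-model of a
    // TSPACE domain captured as types, so that SRA design decisions can be
    // traced back to meta-model elements. All names are hypothetical.
    interface ToolDescription {
      name: string;
      capabilities: string[];        // e.g. 'model-editing', 'code-analysis'
      deploymentModel: 'SaaS' | 'container' | 'VM';
    }

    interface CloudPlatformConstraint {
      platform: string;              // e.g. 'OpenStack'
      multiTenancy: boolean;
    }

    interface ArchitectureElement {
      id: string;
      rationale: string;                       // design guideline that motivated it
      realizes: ToolDescription[];             // tools provisioned through this element
      constrainedBy: CloudPlatformConstraint[];
    }

    // Echoes the framework's recommendation of a quantifiable measurement scheme.
    interface QualityMeasurement {
      attribute: string;             // e.g. 'modifiability', 'scalability'
      metric: string;
      value: number;
    }

    // Example element tying an architectural decision to tools and constraints.
    const provisioningGateway: ArchitectureElement = {
      id: 'tool-provisioning-gateway',
      rationale: 'Single entry point for tool provisioning requests',
      realizes: [{ name: 'ModelEditor', capabilities: ['model-editing'], deploymentModel: 'SaaS' }],
      constrainedBy: [{ platform: 'OpenStack', multiTenancy: true }],
    };
    console.log(provisioningGateway.id);
    ```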

    Co-Design of Business and IT Services – a Tool-Supported Approach

    Service modeling is an important step in designing service-oriented systems. There are multiple levels of design because service science includes both the business rationale and the IT implementation of the services. As business and IT perspectives differ, the modeling techniques are different, and often the respective modeling languages are disconnected or ad hoc. We propose a new service-modeling approach for connecting the business modeling and the web service modeling by presenting these two perspectives in a single model. We present a multi-stage modeling process for capturing different perspectives and creating models iteratively by working with levels of abstraction from higher to lower. The model is then used as an input in order to generate a REST API specification in the OpenAPI format to feed the next stages of the service life-cycle.
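
    A minimal sketch of the final model-to-specification step, assuming a simple service model shape that links business-level operations to REST operations; the types and the generator are hypothetical and not the paper's tool.

    ```typescript
    // Sketch: deriving an OpenAPI 3.0 document from a simple service model.
    // All type and field names are hypothetical.
    interface ServiceOperation {
      name: string;          // e.g. 'listInvoices'
      httpMethod: 'get' | 'post' | 'put' | 'delete';
      path: string;          // e.g. '/invoices'
      description: string;   // business-level rationale for the operation
    }

    interface ServiceModel {
      serviceName: string;
      operations: ServiceOperation[];
    }

    function toOpenApi(model: ServiceModel): object {
      const paths: Record<string, Record<string, object>> = {};
      for (const op of model.operations) {
        paths[op.path] = {
          ...(paths[op.path] ?? {}),
          [op.httpMethod]: {
            operationId: op.name,
            summary: op.description,
            responses: { '200': { description: 'Successful response' } },
          },
        };
      }
      return {
        openapi: '3.0.3',
        info: { title: model.serviceName, version: '1.0.0' },
        paths,
      };
    }

    // Example: a business "Invoicing" service mapped to one REST operation.
    console.log(JSON.stringify(toOpenApi({
      serviceName: 'Invoicing',
      operations: [{ name: 'listInvoices', httpMethod: 'get',
                     path: '/invoices', description: 'List customer invoices' }],
    }), null, 2));
    ```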

    Environment Orientation: a structured simulation approach for agent-based complex systems

    Complex systems are collections of independent agents interacting with each other and with their environment to produce emergent behaviour. Agent-based computer simulation is one of the main ways of studying complex systems. A naive approach to such simulation can fare poorly, due to large communication overhead and due to the scope for deadlock between the interacting agents sharing a computational platform. Agent interaction can instead be considered entirely from the point of view of the environment(s) within which the agents interact. Structuring a simulation using such Environment Orientation leads to a simulation that reduces communication overhead, that is effectively deadlock-free, and yet still behaves in the manner required. Additionally, the Environment Orientation architecture eases the development of more sophisticated large-scale simulations, with multiple kinds of complex agents, situated in and interacting with multiple kinds of environments. We describe the Environment Orientation simulation architecture. We report on a number of experiments that demonstrate the effectiveness of the Environment Orientation approach: a simple flocking system, a flocking system with multiple sensory environments, and a flocking system in an external environment.
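
    A compact sketch of the Environment Orientation idea, under the assumption of a single flocking environment: agents never communicate directly; each step they read a snapshot of the environment, and the environment applies all updates at once. The class and parameter values are illustrative, not the paper's implementation.

    ```typescript
    // Illustrative sketch: agents interact only through the environment, which
    // avoids agent-to-agent messaging and the associated deadlock risk.
    interface Vec { x: number; y: number; }

    interface Boid { id: number; pos: Vec; vel: Vec; }

    class FlockingEnvironment {
      private boids: Boid[] = [];

      register(boid: Boid): void { this.boids.push(boid); }

      // Read-only view of the previous step's state.
      neighboursOf(self: Boid, radius: number): Boid[] {
        return this.boids.filter(b =>
          b.id !== self.id &&
          Math.hypot(b.pos.x - self.pos.x, b.pos.y - self.pos.y) < radius);
      }

      // Agents compute against the old snapshot; the environment applies all
      // updates together, so no agent ever waits on another agent.
      step(): void {
        const updates = this.boids.map(b => {
          const ns = this.neighboursOf(b, 10);
          const cohesion: Vec = ns.length === 0 ? { x: 0, y: 0 } : {
            x: ns.reduce((s, n) => s + n.pos.x, 0) / ns.length - b.pos.x,
            y: ns.reduce((s, n) => s + n.pos.y, 0) / ns.length - b.pos.y,
          };
          return {
            id: b.id,
            vel: { x: b.vel.x + 0.01 * cohesion.x, y: b.vel.y + 0.01 * cohesion.y },
          };
        });
        for (const u of updates) {
          const b = this.boids.find(x => x.id === u.id)!;
          b.vel = u.vel;
          b.pos = { x: b.pos.x + b.vel.x, y: b.pos.y + b.vel.y };
        }
      }
    }
    ```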

    Evaluating the Informatics for Integrating Biology and the Bedside system for clinical research

    Background: Selecting patient cohorts is a critical, iterative, and often time-consuming aspect of studies involving human subjects; informatics tools for helping streamline the process have been identified as important infrastructure components for enabling clinical and translational research. We describe the evaluation of a free and open source cohort selection tool from the Informatics for Integrating Biology and the Bedside (i2b2) group: the i2b2 hive. Methods: Our evaluation covered the usability and functionality of the i2b2 hive using several real-world examples of research data requests received electronically at the University of Utah Health Sciences Center between 2006 and 2008. The hive server component and the visual query tool application were evaluated for their suitability as a cohort selection tool on the basis of the types of data elements requested, as well as the effort required to fulfill each research data request using the i2b2 hive alone. Results: We found the i2b2 hive to be suitable for obtaining estimates of cohort sizes and generating research cohorts based on simple inclusion/exclusion criteria, which accounted for about 44% of the clinical research data requests sampled at our institution. Data requests that relied on post-coordinated clinical concepts, aggregate values of clinical findings, or temporal conditions in their inclusion/exclusion criteria could not be fulfilled using the i2b2 hive alone, and required one or more intermediate data steps in the form of pre- or post-processing, modifications to the hive metadata, etc. Conclusion: The i2b2 hive was found to be a useful cohort-selection tool for fulfilling common types of requests for research data, especially for estimating initial cohort sizes. For another institution that might want to use the i2b2 hive for clinical research, we recommend having structured, coded clinical data and metadata available that can be transformed to fit the logical data models of the i2b2 hive, strategies for extracting relevant clinical data from source systems, and the ability to perform substantial pre- and post-processing of these data.
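
    As an illustration of the simple inclusion/exclusion criteria mentioned above, the sketch below filters coded observations into a cohort; it is not i2b2 code, and the field names and concept codes are hypothetical.

    ```typescript
    // Illustrative sketch: a cohort defined purely by presence/absence of coded
    // concepts, the kind of request the study found the i2b2 hive handles well.
    // Temporal or aggregate criteria would need extra pre-/post-processing.
    interface Observation { patientId: string; conceptCode: string; startDate: Date; }

    interface Criteria {
      include: string[];   // concept codes the patient must have
      exclude: string[];   // concept codes the patient must not have
    }

    function selectCohort(observations: Observation[], c: Criteria): Set<string> {
      const byPatient = new Map<string, Set<string>>();
      for (const o of observations) {
        if (!byPatient.has(o.patientId)) byPatient.set(o.patientId, new Set());
        byPatient.get(o.patientId)!.add(o.conceptCode);
      }
      const cohort = new Set<string>();
      for (const [patient, codes] of byPatient) {
        const included = c.include.every(code => codes.has(code));
        const excluded = c.exclude.some(code => codes.has(code));
        if (included && !excluded) cohort.add(patient);
      }
      return cohort;
    }

    // Example: include a (hypothetical) diabetes code, exclude an insulin code.
    const sample: Observation[] = [
      { patientId: 'p1', conceptCode: 'ICD9:250.00', startDate: new Date('2007-03-01') },
      { patientId: 'p2', conceptCode: 'ICD9:250.00', startDate: new Date('2007-06-15') },
      { patientId: 'p2', conceptCode: 'MED:insulin', startDate: new Date('2007-07-01') },
    ];
    console.log(selectCohort(sample, { include: ['ICD9:250.00'], exclude: ['MED:insulin'] }));
    ```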

    Decentralizing the Social Web

    For over a decade, standards bodies like the IETF and W3C have attempted to prevent the centralization of the Web via the use of open standards for 'permission-less innovation.' Yet today, these standards, from OAuth to RSS, seem to have failed to prevent the massive centralization of the Web at the hands of a few major corporations like Google and Facebook. We'll delve deep into the lessons of failed attempts to replace DNS like XRIs, identity systems like OpenID, and metadata formats like the Semantic Web, all of which were recuperated by centralized platforms like Facebook as Facebook Connect and the "Like" button. Learning from the past, a new generation of blockchain standards and governance mechanisms may be our last, best chance to save the Web.

    miRMaid: a unified programming interface for microRNA data resources

    Background: MicroRNAs (miRNAs) are endogenous small RNAs that play a key role in post-transcriptional regulation of gene expression in animals and plants. The number of known miRNAs has increased rapidly over the years. The current release (version 14.0) of miRBase, the central online repository for miRNA annotation, comprises over 10,000 miRNA precursors from 115 different species. Furthermore, a large number of decentralized online resources are now available, each contributing important miRNA annotation and information. Results: We have developed a software framework, designated here as miRMaid, with the goal of integrating miRNA data resources in a uniform web service interface that can be accessed and queried by researchers and, most importantly, by computers. miRMaid is built around data from miRBase and is designed to follow the official miRBase data releases. It exposes miRBase data as inter-connected web services. Third-party miRNA data resources can be modularly integrated as miRMaid plugins, or they can loosely couple with miRMaid as individual entities on the World Wide Web. miRMaid is available as a public web service but is also easily installed as a local application. The software framework is freely available under the LGPL open source license for academic and commercial use. Conclusion: miRMaid is an intuitive and modular software platform designed to unify miRBase and independent miRNA data resources. It enables miRNA researchers to computationally address complex questions involving the multitude of miRNA data resources. Furthermore, miRMaid constitutes a basic framework for further programming in which microRNA-interested bioinformaticians can readily develop their own tools and data sources.
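
    A small sketch of what programmatic access to a miRMaid-style REST interface could look like; the host and resource path are assumptions for illustration, and a real installation's routes may differ.

    ```typescript
    // Sketch of programmatic access to a miRMaid-style REST interface.
    // The host and resource path below are hypothetical examples; consult a
    // running instance for its actual routes.
    async function fetchPrecursor(name: string): Promise<unknown> {
      const url = `https://mirmaid.example.org/precursors/${name}.json`;
      const response = await fetch(url, { headers: { Accept: 'application/json' } });
      if (!response.ok) {
        throw new Error(`Request failed: ${response.status}`);
      }
      // Inter-connected resources would typically link onward to mature
      // sequences, genome positions, or plugin-provided annotations.
      return response.json();
    }

    fetchPrecursor('hsa-mir-21')
      .then(data => console.log(data))
      .catch(err => console.error(err));
    ```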

    Integrating sequence and structural biology with DAS.

    BACKGROUND: The Distributed Annotation System (DAS) is a network protocol for exchanging biological data. It is frequently used to share annotations of genomes and protein sequences. RESULTS: Here we present several extensions to the current DAS 1.5 protocol. These provide new commands to share alignments and three-dimensional molecular structure data, add the possibility of registration and discovery of DAS servers, and establish a convention for providing different types of data plots. We present examples of web sites and applications that use the new extensions. We operate a public registry of DAS sources, which now includes entries for more than 250 distinct sources. CONCLUSION: Our DAS extensions are essential for the management of the growing number of services and the exchange of diverse biological data sets. In addition, the extensions allow new types of applications to be developed and scientific questions to be addressed. The registry of DAS sources is available at http://www.dasregistry.org.
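
    For illustration, the sketch below issues a DAS 1.5 'features' request and returns the XML response; the server base URL and source name are placeholders, and real sources would be discovered via a DAS registry such as the one described above.

    ```typescript
    // Sketch of a DAS 1.5 'features' request. DAS responses are XML documents
    // (DASGFF for the features command); parsing is left out for brevity.
    async function dasFeatures(serverBase: string, source: string,
                               segment: string): Promise<string> {
      const url = `${serverBase}/das/${source}/features?segment=${segment}`;
      const response = await fetch(url, { headers: { Accept: 'text/xml' } });
      if (!response.ok) {
        throw new Error(`DAS request failed: ${response.status}`);
      }
      return response.text();
    }

    // e.g. annotations on a region of human chromosome 1 (placeholder values).
    dasFeatures('https://das.example.org', 'example_source', '1:1000000,1001000')
      .then(xml => console.log(xml.slice(0, 200)))
      .catch(err => console.error(err));
    ```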